
    Heterogeneity: The Key to Achieve Power-Proportional Computing

    The Smart 2020 report on the low-carbon economy in the information age estimates that 2% of the global CO2 footprint will come from ICT in 2020. Of this share, 18% will be caused by data centers, while 45% will come from personal computers. Classical research to reduce this footprint usually focuses on new consolidation techniques for global data centers. In reality, personal computers and private computing infrastructures are here to stay. They are subject to irregular workloads and are usually largely under-loaded. Most of these computers waste a tremendous amount of energy, as nearly half of their maximum power consumption comes from simply being switched on. The ideal situation would be to use proportional computers that draw nearly 0 W when lightly loaded. This article shows the gains of using perfectly proportional hardware on different types of data centers: 50% gains for the servers used during the 1998 World Cup and 20% for the already optimized Google servers. Gains would reach up to 80% for personal computers. As such perfect hardware does not yet exist, a real platform composed of Intel i7, Intel Atom and Raspberry Pi machines is evaluated. Using this infrastructure, the gains are 20% for the World Cup data center, 5% for the Google data centers and up to 60% for personal computers.
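    As a rough illustration of the kind of savings the article quantifies, the sketch below compares a conventional server (whose idle power is about half of its peak) with an ideally power-proportional one over a mostly idle load trace. The power figures and the trace are assumptions chosen for the example, not measurements from the paper.

```python
# Illustrative comparison of a conventional server (high idle power) with an
# ideally power-proportional one, over an hourly load trace. All numbers here
# are assumptions for the sake of the example, not values from the paper.

P_IDLE = 100.0   # watts drawn by a conventional server at 0% load (assumed)
P_PEAK = 200.0   # watts drawn at 100% load (assumed)

def conventional_power(load):
    """Linear power model: idle floor plus a load-dependent part."""
    return P_IDLE + (P_PEAK - P_IDLE) * load

def proportional_power(load):
    """Ideal proportional hardware: power scales directly with load."""
    return P_PEAK * load

# A lightly loaded machine (e.g. a personal computer), one sample per hour.
load_trace = [0.05] * 16 + [0.60] * 8   # 16 h nearly idle, 8 h of real work

e_conv = sum(conventional_power(l) for l in load_trace)   # watt-hours
e_prop = sum(proportional_power(l) for l in load_trace)

print(f"conventional: {e_conv:.0f} Wh, proportional: {e_prop:.0f} Wh, "
      f"saving: {100 * (1 - e_prop / e_conv):.0f}%")
```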

    Thermal-aware cloud middleware to reduce cooling needs

    As we live in a data-driven, Internet-oriented world, the need for data centers is growing. The main limitation to building cloud infrastructures is their energy consumption. Moreover, their design is far from perfect: servers are not the only components consuming power, as cooling systems are responsible for about half of the consumption. Cooling costs can be reduced by intelligent scheduling, in our case through virtual machine migrations. In this paper, we propose a dynamic reconfiguration based on the evolution of server temperatures and load. The idea is to spread heat production to reduce cooling costs, and to consolidate the workload when possible to reduce server costs. The challenge lies in satisfying these opposing objectives. We tested our algorithm on an experimental test bed and managed to cap the temperature of the data center room while still optimizing server use and without impacting application performance.
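    The following toy sketch illustrates the spread-versus-consolidate trade-off described above: move VMs away from overheating hosts, and empty a lightly loaded host when nothing is hot. The thresholds, data layout and capacity check are assumptions made for illustration and are not the paper's middleware.

```python
# Toy decision rule for the spread-vs-consolidate trade-off described above.
# Thresholds, data structures and the migration step are assumptions made for
# illustration; the paper's middleware works on live temperature/load metrics.

TEMP_CAP = 35.0      # deg C: temperature we do not want to exceed (assumed)
LOW_LOAD = 0.30      # below this CPU load a host is a consolidation candidate (assumed)

def plan_migrations(hosts):
    """hosts: list of dicts with 'name', 'temp' (deg C), 'load' (0..1), 'vms' (list)."""
    hot = [h for h in hosts if h["temp"] > TEMP_CAP]
    cool = sorted((h for h in hosts if h["temp"] <= TEMP_CAP), key=lambda h: h["temp"])
    plan = []

    # 1) Spread heat: move one VM away from each overheating host to the coolest host.
    for h in hot:
        if h["vms"] and cool:
            plan.append((h["vms"][0], h["name"], cool[0]["name"]))

    # 2) Consolidate: if nothing is hot, empty the least-loaded host so it can be
    #    switched off, provided another host can absorb its load.
    if not hot:
        idle = [h for h in hosts if 0 < h["load"] < LOW_LOAD]
        if idle:
            src = min(idle, key=lambda h: h["load"])
            dst = next((h for h in sorted(hosts, key=lambda x: -x["load"])
                        if h is not src and h["load"] + src["load"] < 0.9), None)
            if dst:
                plan += [(vm, src["name"], dst["name"]) for vm in src["vms"]]
    return plan
```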

    DVFS governor for HPC: Higher, Faster, Greener

    In High Performance Computing, respecting the environment is usually secondary compared to performance: the faster, the better. As exascale computing is in the spotlight, electric power concerns arise, since current exascale projects might need too much power to even boot. A recent incentive (exascale at a maximum of 20 MW) shows that reality is catching up with HPC center designers. Beyond classical work on hardware infrastructure or at the middleware level, we believe that system-level solutions have great potential for energy reduction. Moreover, energy reduction has often been neglected by the HPC community, which focuses mainly on raw computing performance. In the literature, energy savings are achieved mainly by two means: either processor load is the only metric taken into account to reduce processor frequency while ensuring no impact on raw performance, or processor frequency is managed only at the task level, outside the critical path. In this article we show that designing and implementing a DVFS (Dynamic Voltage and Frequency Scaling) mechanism based on instantaneous system values (here, network activity) can save up to 25% of energy consumption while only marginally reducing performance. In several cases, reducing energy consumption even leads to an increase in performance because of the thermal budget of recent processors. This work is validated with real experiments on a Linux cluster using the NAS Parallel Benchmarks (NPB).
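    The idea of driving DVFS from instantaneous network activity can be sketched as follows, using the standard Linux cpufreq sysfs interface and /proc/net/dev. The threshold, period and frequency values are assumptions, and this script only illustrates the principle; it is not the governor evaluated in the article.

```python
# Illustration of the idea behind the governor: when the node is mostly waiting
# on the network (communication phases), drop the CPU frequency; when traffic
# is low (compute phases), run at full speed. Paths are the standard Linux
# cpufreq/procfs ones; the threshold, period and frequencies are assumed values.

import time

CPU = 0
SET = f"/sys/devices/system/cpu/cpu{CPU}/cpufreq/scaling_setspeed"  # needs 'userspace' governor and root
FREQ_LOW, FREQ_HIGH = 1_200_000, 2_400_000   # kHz, assumed available frequencies
NET_THRESHOLD = 10 * 1024 * 1024             # bytes/s of NIC traffic => "communication phase" (assumed)
PERIOD = 0.5                                 # seconds between decisions (assumed)

def nic_bytes(iface="eth0"):
    """Total rx+tx bytes for one interface, parsed from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[0]) + int(fields[8])   # rx_bytes + tx_bytes
    return 0

def set_freq(khz):
    with open(SET, "w") as f:
        f.write(str(khz))

last = nic_bytes()
while True:
    time.sleep(PERIOD)
    now = nic_bytes()
    rate = (now - last) / PERIOD
    last = now
    set_freq(FREQ_LOW if rate > NET_THRESHOLD else FREQ_HIGH)
```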

    Tales, short stories, chronicles and/or memoirs? The first-person short narrative in the work of José Rodrigues Miguéis

    José Rodrigues Miguéis published four collections of short stories and novellas: Onde a Noite se Acaba (1946), Léah e Outras Histórias (1958), Gente da Terceira Classe (1962) and Pass(ç)os Confusos (1982). Within this fictional corpus with a strong autobiographical component, the first-person narratives constitute an ambiguous scene of enunciation, a mixture of fiction, chronicles and memoirs.

    Evaluation and optimization of the energy performance of data centers

    This habilitation aims to answer the question "How can a data center be managed efficiently?" by providing the necessary theoretical and practical tools. Data centers are at the heart of the lives of a growing share of the population without being visible, owing to the predominance of online services. Their electrical consumption is therefore a crucial issue today, and will be even more so in the foreseeable future. Efficient management makes it possible to optimize quality of service while reducing electrical consumption. This work presents the different tools required: first, those related to measurement, both technically and in terms of the definition of metrics. Methods for modelling such problems are then explored, together with exact and approximate resolution techniques. The next step studies various heuristics allowing an approximate but fast resolution of the data center management problem. Several validations, both experimental and based on improved simulators, support these demonstrations. An outlook towards the future, through distributed, heterogeneous, cooperative and multi-scale (in space and time) "datacenter in a box" systems, concludes this work.

    Energy aware clouds scheduling using anti-load balancing algorithm: EACAB

    Cloud computing is a highly scalable and cost-effective infrastructure for running HPC, enterprise and Web applications. However, the rapid growth of the demand for computational power by scientific, business and web applications has led to the creation of large-scale data centers consuming enormous amounts of electrical power. Hence, energy-efficient solutions are required to minimize their energy consumption. The objective of our approach is to reduce a data center's total energy consumption by controlling the overall resource usage of cloud applications while guaranteeing service level agreements. This article presents Energy Aware Clouds Scheduling using an Anti-load Balancing algorithm (EACAB). The proposed algorithm works by associating a credit value with each node. The credit of a node depends on its affinity to its jobs, its current workload and its communication behavior. Energy savings are achieved by continuous consolidation of VMs according to the current utilization of resources, the virtual network topologies established between VMs and the thermal state of the computing nodes. The experimental results show that cloud application energy consumption and energy efficiency are effectively improved.
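    The credit notion can be sketched as a weighted score per node, with low-credit nodes selected for consolidation so they can be powered down. The weights, formula and threshold below are assumptions made for illustration; the paper does not publish this code.

```python
# Sketch of a per-node credit for anti-load balancing: each node gets a credit
# from its job affinity, current workload and communication behaviour, and
# nodes whose credit falls below a threshold are emptied (their VMs migrated
# away) so the host can be switched off. Weights, formula and threshold are
# assumed values, not the paper's.

W_AFFINITY, W_LOAD, W_COMM = 0.4, 0.4, 0.2      # assumed weights
CREDIT_THRESHOLD = 0.35                          # below this, consolidate the node away (assumed)

def credit(node):
    """node: dict with 'affinity', 'load', 'comm_locality', each in [0, 1]."""
    return (W_AFFINITY * node["affinity"]
            + W_LOAD * node["load"]
            + W_COMM * node["comm_locality"])

def nodes_to_empty(nodes):
    """Return the nodes whose VMs should be migrated so the host can be switched off."""
    return [n for n in nodes if n["load"] > 0 and credit(n) < CREDIT_THRESHOLD]

nodes = [
    {"name": "n1", "affinity": 0.9, "load": 0.8, "comm_locality": 0.7},
    {"name": "n2", "affinity": 0.2, "load": 0.1, "comm_locality": 0.3},
]
print([n["name"] for n in nodes_to_empty(nodes)])   # -> ['n2']
```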

    Energy simulation of distributed tasks with dynamic frequency changes

    In recent years, much research has been conducted in the field of distributed systems simulation, in order to analyze and understand their behavior. Some of these simulators focus on the task-scheduling problem, others are developed specifically for network modelling, and only a few of them provide all the tools needed to simulate the energy consumption of an application, a machine or a data center. This article describes the tools that must be integrated into a simulator so that it can run simulations aimed at improving the energy behavior of machines. The emphasis is placed mainly on DVFS (Dynamic Voltage and Frequency Scaling) and its implementation in CloudSim, the simulator used for the experiments described in this article, but also on how the simulations are carried out and on the methodology adopted to ensure the quality of the measurements and simulations.
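    At its core, what a simulator needs for DVFS support is a power model parameterized by frequency and utilization, plus an energy integration over the stretched execution time. The sketch below shows that idea in plain Python with invented calibration points; it is not CloudSim's actual API.

```python
# What a simulator needs to account for DVFS: a power model mapping
# (frequency, utilisation) to watts, and an energy integration over the task's
# execution. Plain-Python sketch of the idea; calibration points are invented.

# Assumed calibration: measured power (W) at full utilisation per frequency (GHz).
POWER_AT_FULL_LOAD = {1.2: 60.0, 1.8: 85.0, 2.4: 120.0}
POWER_IDLE = 40.0   # assumed static power, independent of frequency

def power(freq_ghz, utilisation):
    """Interpolate between idle power and full-load power at this frequency."""
    return POWER_IDLE + (POWER_AT_FULL_LOAD[freq_ghz] - POWER_IDLE) * utilisation

def energy(work_ghz_seconds, freq_ghz, utilisation=1.0):
    """Energy in joules for a task needing `work_ghz_seconds` of CPU work.
    Lowering the frequency stretches execution time, which the model captures."""
    duration = work_ghz_seconds / (freq_ghz * utilisation)   # seconds
    return power(freq_ghz, utilisation) * duration

for f in sorted(POWER_AT_FULL_LOAD):
    print(f"{f} GHz: {energy(240.0, f):.0f} J")
# A lower frequency draws less power but runs longer; the model lets the
# simulator decide which effect wins for a given task.
```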

    Grid'5000 energy-aware experiments with DVFS

    In recent years, much research has been conducted in the area of energy efficiency in distributed systems. To analyze, understand and improve their behavior, simulators provide useful tools, including support for energy-aware mechanisms such as DVFS (Dynamic Voltage and Frequency Scaling). This paper presents current work on Grid'5000 to deploy a specific distributed electromagnetic application called TLM (Transmission Line Matrix), using DVFS and power measurements. The aim is to launch different sets of experiments with different DVFS configurations, and then compare the simulation and real-experiment results.
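    On the measurement side, comparing real runs with simulations requires turning timestamped power samples (such as those produced by Grid'5000 wattmeters) into an energy figure. A simple trapezoidal integration, sketched below with made-up samples, is sufficient for that step.

```python
# Integrate a series of timestamped power samples into an energy figure so it
# can be compared against a simulated value. The sample values are invented.

def energy_joules(samples):
    """samples: list of (timestamp_seconds, power_watts), sorted by time."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += (p0 + p1) / 2.0 * (t1 - t0)   # trapezoid between two samples
    return total

samples = [(0.0, 90.0), (1.0, 110.0), (2.0, 130.0), (3.0, 95.0)]
print(f"{energy_joules(samples):.1f} J over {samples[-1][0]} s")
```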

    Energy- and Heat-aware HPC Benchmarks

    Evaluating data centers is difficult. Several metrics are available to provide insight into their behavior, but HPC-oriented data centers are usually tested with simple benchmarks such as LINPACK. A good choice of benchmarks is necessary to evaluate the full impact of applications on these data centers. One aspect that is often overlooked is their energy and thermal quality. To evaluate these qualities, adequate benchmarks are required at several levels, from the individual nodes to the whole building. Classical benchmark selection focuses mainly on time and raw performance. This article aims at shifting the focus towards an energy and power point of view. To this end, we select benchmarks able to evaluate data centers not only from this performance perspective, but also from the energy and thermal standpoints. We also provide insight into several classical benchmarks and a method to select a small number of them, in order to provide a sensible and minimal set of energy- and thermal-aware benchmarks for HPC systems.
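    One possible way to obtain such a minimal set, sketched below, is to characterize each benchmark by an energy/thermal profile and greedily keep the most mutually dissimilar ones. The profiles and the selection criterion are invented for illustration and are not the method of the article.

```python
# Greedy selection of a small, representative benchmark set: characterise each
# benchmark by an (avg power, peak temperature rise, runtime) profile and keep
# the ones most dissimilar from those already selected. Profile numbers are
# invented and only show the mechanics of the selection.

import math

profiles = {
    "LINPACK":   (180.0, 22.0, 600.0),
    "NPB-CG":    (140.0, 12.0, 300.0),
    "NPB-EP":    (150.0, 10.0, 280.0),
    "STREAM":    (120.0,  8.0, 120.0),
    "idle-spin": ( 95.0,  3.0, 120.0),
}

def select(profiles, k):
    names = list(profiles)
    chosen = [max(names, key=lambda n: profiles[n][0])]   # start from the most power-hungry
    while len(chosen) < k:
        rest = [n for n in names if n not in chosen]
        # pick the benchmark farthest from everything already chosen
        chosen.append(max(rest, key=lambda n: min(math.dist(profiles[n], profiles[c])
                                                  for c in chosen)))
    return chosen

print(select(profiles, 3))
```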

    A batch scheduler with high level components

    In this article we present the design choices and the evaluation of a batch scheduler for large clusters, named OAR. This batch scheduler is based on an original design that emphasizes low software complexity by using high-level tools. The global architecture is built upon the scripting language Perl and the relational database engine MySQL. The goal of the OAR project is to prove that it is possible today to build a complex resource management system using such tools without sacrificing efficiency and scalability. Currently, our system offers most of the important features implemented by other batch schedulers, such as priority scheduling (by queues), reservations, backfilling and some global computing support. Despite the use of high-level tools, our experiments show that our system's performance is close to that of other systems. Furthermore, OAR currently manages 700 nodes (a metropolitan GRID) and has shown good efficiency and robustness.
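    The backfilling feature mentioned above can be reduced to a small sketch: jobs start in priority order, and a later job may jump ahead only if it fits in the idle nodes and is guaranteed to finish before the blocked head job's reservation. OAR itself is written in Perl on top of MySQL; the Python below, with invented job data, only illustrates the idea.

```python
# Toy EASY-style backfilling: start jobs in priority order, reserve a start
# time for the first job that does not fit, then backfill later jobs that fit
# in the free nodes and finish before that reservation. Job data is invented.

def backfill(queue, running, free_nodes, now):
    """queue: waiting jobs [{'name','nodes','walltime'}] in priority order.
    running: [{'nodes','end'}]. Returns the names of jobs to start now."""
    started = []
    # Start jobs strictly in order while they fit.
    while queue and queue[0]["nodes"] <= free_nodes:
        job = queue.pop(0)
        running.append({"nodes": job["nodes"], "end": now + job["walltime"]})
        free_nodes -= job["nodes"]
        started.append(job["name"])
    if not queue:
        return started

    # The head job is blocked: its reservation is the earliest time enough
    # running jobs will have finished to free the nodes it needs.
    head = queue[0]
    avail, reservation = free_nodes, None
    for r in sorted(running, key=lambda r: r["end"]):
        avail += r["nodes"]
        if avail >= head["nodes"]:
            reservation = r["end"]
            break
    if reservation is None:          # head needs more nodes than exist
        return started

    # Backfill: later jobs may start now if they fit in the free nodes and
    # finish before the head job's reservation, so they never delay it.
    for job in queue[1:]:
        if job["nodes"] <= free_nodes and now + job["walltime"] <= reservation:
            free_nodes -= job["nodes"]
            started.append(job["name"])
    return started

queue = [{"name": "big", "nodes": 8, "walltime": 3600},
         {"name": "small", "nodes": 2, "walltime": 600}]
running = [{"nodes": 6, "end": 1800}]
print(backfill(queue, running, free_nodes=4, now=0))   # -> ['small']
```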